Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template for generating Markov networks. Inference in MLNs is probabilistic, and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited both at the first-order level and in the generated Markov network. We analyze the graph structures produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, their theoretical limitations, and their appeal for implementation. We find that a straightforward application of a recent result yields an exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
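To make the classical baseline concrete, the following is a minimal sketch of MCMC Gibbs sampling over a toy ground Markov network, of the kind an MLN template would generate. The model here is an illustrative assumption, not taken from the paper: two binary ground atoms A and B coupled by a single weighted formula that rewards agreement with weight w, so each variable is resampled from its conditional distribution given the other.

```python
import math
import random

def gibbs_sample(w=1.0, steps=5000, seed=0):
    """Estimate P(A == B) in a two-variable pairwise Markov network
    where states with A == B carry factor exp(w) and others carry 1.
    (Hypothetical toy model; the exact answer is exp(w) / (exp(w) + 1).)"""
    rng = random.Random(seed)
    a, b = 0, 0
    agree = 0
    for _ in range(steps):
        # Resample A given B: P(A = v | B) is proportional to exp(w * [v == B]).
        p1 = math.exp(w * (1 == b))
        p0 = math.exp(w * (0 == b))
        a = 1 if rng.random() < p1 / (p0 + p1) else 0
        # Resample B given A, symmetrically.
        p1 = math.exp(w * (a == 1))
        p0 = math.exp(w * (a == 0))
        b = 1 if rng.random() < p1 / (p0 + p1) else 0
        agree += int(a == b)
    return agree / steps  # Monte Carlo estimate of P(A == B)

if __name__ == "__main__":
    # For w = 1 the exact marginal is exp(1)/(exp(1)+1), roughly 0.73.
    print(round(gibbs_sample(), 2))
```

The quantum protocols surveyed in the paper target exactly this sampling step: preparing (approximations of) the Gibbs distribution as a quantum state rather than converging to it by a classical Markov chain.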